
    Formal verification of a fault tolerant clock synchronization algorithm

    A formal specification and mechanically assisted verification of the interactive convergence clock synchronization algorithm of Lamport and Melliar-Smith is described. Several technical flaws in the analysis given by Lamport and Melliar-Smith were discovered, even though their presentation is unusually precise and detailed. It seems that these flaws were not detected by informal peer scrutiny. The flaws are discussed, and a revised presentation of the analysis is given that not only corrects them but is also more precise and easier to follow. Some of the corrections to the flaws require slight modifications to the original assumptions underlying the algorithm and to the constraints on its parameters, and thus change the external specifications of the algorithm. The formal analysis of the interactive convergence clock synchronization algorithm was performed using the Enhanced Hierarchical Development Methodology (EHDM) formal specification and verification environment. This application of EHDM provides a demonstration of some of the capabilities of the system.
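    The averaging step at the heart of the interactive convergence algorithm is simple to state. The sketch below is a minimal Python rendering of one resynchronization round, assuming a hypothetical skew threshold DELTA and invented process names; it is not drawn from the EHDM specification.

```python
# Illustrative sketch of one round of the interactive convergence
# algorithm (Lamport & Melliar-Smith): each process averages the
# apparent skews of all clocks, treating any skew larger in magnitude
# than the threshold DELTA as zero. Names and values are assumptions
# made for this illustration only.

DELTA = 0.01  # maximum skew tolerated between good clocks (assumed value)

def correction(own_id, apparent_skews):
    """apparent_skews[q] is process own_id's estimate of clock_q - clock_own.

    Readings exceeding DELTA in magnitude are presumed faulty and
    replaced by 0 before averaging; the process's own skew contributes 0.
    """
    n = len(apparent_skews)
    total = 0.0
    for q, skew in apparent_skews.items():
        if q == own_id:
            continue  # own skew is 0 by definition
        total += skew if abs(skew) <= DELTA else 0.0
    return total / n  # adjustment applied to own clock this round

# Example: process 0 reads skews from processes 1..3; process 3 is faulty.
print(correction(0, {0: 0.0, 1: 0.004, 2: -0.002, 3: 5.0}))  # 0.0005
```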

    Formal verification of AI software

    The application of formal verification techniques to Artificial Intelligence (AI) software, particularly expert systems, is investigated. Constraint satisfaction and model inversion are identified as two formal specification paradigms for different classes of expert systems. A formal definition of consistency is developed, and the notion of approximate semantics is introduced. Examples are given of how these ideas can be applied in both declarative and imperative forms.
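    As an illustration of the constraint-satisfaction paradigm, the hedged sketch below states a declarative constraint that any acceptable output of a hypothetical scheduling expert system must satisfy and checks the system's recommendation against it; the domain, rule, and names are invented and not taken from the report.

```python
# Hypothetical illustration of "constraint satisfaction" as a formal
# specification for an expert system: instead of specifying how the
# system derives a recommendation, we state a constraint that any
# acceptable recommendation must satisfy and check outputs against it.
# The scheduling domain, rules, and names are invented for illustration.

def satisfies_spec(task, recommendation):
    """Declarative constraint: the assigned slot must fit inside the
    task's window and match its duration exactly."""
    start, end = recommendation
    return (task["earliest"] <= start
            and end <= task["latest"]
            and end - start == task["duration"])

def expert_system(task):
    # Stand-in for a rule-based component whose internals we do not verify.
    return (task["earliest"], task["earliest"] + task["duration"])

task = {"earliest": 9, "latest": 17, "duration": 3}
assert satisfies_spec(task, expert_system(task))  # run-time check of the spec
```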

    Formal Analysis for Embedded Real-Time Systems

    Timed systems are notoriously hard to debug and to verify because the continuous nature of time allows vast numbers of different behaviors; embedded systems must often deal with faults, and these introduce another dimension of complexity. Simulation and testing provide little assurance in these domains because they can visit only a small fraction of the possible behaviors. Formal methods of analysis have some promise, but until recently they could deal only with one dimension at a time: classical model checking could cope with faults but could not model continuous time; model checkers for timed automata could deal with continuous time but not the "case explosion" due to faults. Recently, a new class of "infinite bounded" model checkers has been developed; these show promise that they can cope simultaneously with both continuous time and discrete faults.
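    The sketch below illustrates, under invented assumptions, the idea behind such checkers: unroll a few steps of a timed, fault-prone transition system into an SMT formula and ask a solver for a violating trace. The toy watchdog model, the bound K, and the use of the z3 solver are illustrative choices, not the tools referred to in the abstract.

```python
# Hedged sketch of bounded model checking over an infinite-state (timed)
# system: K steps of a real-valued clock with a Boolean fault flag are
# encoded as SMT constraints, and the solver searches for a trace that
# violates the safety property. All constants and names are assumptions.
from z3 import Real, Bool, Solver, And, Or, Not, If, sat

K = 6          # unrolling depth (assumed)
MAX_DELAY = 2  # largest single time advance (assumed)

def has_violation(allow_fault):
    s = Solver()
    fault = Bool("fault")                        # a single, permanent fault
    if not allow_fault:
        s.add(Not(fault))
    x = [Real(f"x_{i}") for i in range(K + 1)]   # clock value at each step
    s.add(x[0] == 0)
    for i in range(K):
        d = Real(f"d_{i}")                       # nondeterministic time advance
        s.add(d >= 0, d <= MAX_DELAY)
        # A healthy watchdog resets its clock whenever it would reach 3;
        # a faulty one may let it run on.
        reset = And(x[i] + d >= 3, Not(fault))
        s.add(x[i + 1] == If(reset, 0, x[i] + d))
    # Safety property under test: the clock never exceeds 3.
    s.add(Or(*[xi > 3 for xi in x]))
    return s.check() == sat

print("counterexample without faults:", has_violation(False))  # expected: False
print("counterexample with a fault:  ", has_violation(True))   # expected: True
```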

    A verified model of fault-tolerance

    The main objectives are: a model of a replicated system with exact-match voting; a fault model that includes transients; a theorem that establishes the conditions under which the system provides fault tolerance; a formal specification of the model; and a mechanically checked verification of the theorem that is consonant with the journal-level presentation. Formal specification and verification revealed typos in the original report, exposed an omission in the original proof, led to a stronger theorem and a more elegant proof, and confirmed that the Enhanced Hierarchical Development Methodology (EHDM) has the capability to specify interesting and useful properties in a direct, natural, and readable manner.
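    A minimal sketch of the exact-match voting idea, assuming the usual setting in which every non-faulty replica computes the same value and a transient fault corrupts at most a minority of them; the replica computation and fault injection below are invented for illustration.

```python
# Minimal sketch of replication with exact-match voting: the voter
# selects a value that an absolute majority of replicas agree on
# exactly. The replica count, the task, and the transient-fault
# injection are invented for illustration.
from collections import Counter

def exact_match_vote(outputs):
    """Return the majority value, or None if no exact majority exists."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) // 2 else None

def replica(inputs, transient_fault=False):
    result = sum(inputs)                               # stand-in for the replicated task
    return result + 1 if transient_fault else result   # a transient corrupts one output

inputs = [3, 4, 5]
outputs = [replica(inputs), replica(inputs, transient_fault=True), replica(inputs)]
print(exact_match_vote(outputs))  # 12: the single transiently faulty replica is outvoted
```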

    Specifying real-time systems with interval logic

    Pure temporal logic makes no reference to time. An interval temporal logic, and an extension to that logic which includes real-time constraints, are described. The application of this logic is demonstrated by giving a specification for the well-known lift (elevator) example. It is shown how interval logic can be extended to include a notion of process. How the specification language and verification environment of EHDM could be enhanced to support this logic is described. A specification of the alternating bit protocol in this extended version of the specification language of EHDM is given.
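    As a flavor of the kind of requirement involved, a typical real-time property of the lift example might be rendered as below; the notation is a generic metric temporal-logic style, not the paper's interval-logic syntax, and the bound T is a hypothetical maximum service time.

```latex
% Illustrative only: a typical real-time lift requirement in a generic
% metric temporal-logic style, not the paper's interval-logic notation.
% T is a hypothetical maximum service time.
\[
  \Box \bigl( \mathit{request}(f) \;\Rightarrow\;
              \Diamond_{\le T} \bigl( \mathit{at}(f) \wedge \mathit{doors\_open}(f) \bigr) \bigr)
\]
```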

    LR(k) sparse-parsers and their optimisation

    PhD thesis. A method of syntactic analysis is developed which is believed to surpass all known competitors in all major respects. The method is based upon that associated with the LR(k) grammars but is faster because it bypasses all reduction steps concerned with 'chain' productions. These are freely selected productions which are considered semantically irrelevant and whose right parts consist of just a single symbol. The parses produced by the method are 'sparse' in that they contain no references to chain productions; they are termed 'chain-free' parses. The CFLR(k) grammars are introduced as the largest class which can be Chain-Free parsed from Left to Right while looking k symbols ahead of the current point of the parse. The properties of these grammars are examined in detail and their relationship to the conventional LR(k) grammars is explored. Techniques are presented for testing grammars for the CFLR(k) property and for constructing chain-free parsers for those grammars possessing the property. Methods are also presented for converting ordinary LR(k) parsers into chain-free parsers. CFLR(k) parsers are more widely applicable than their LR(k) counterparts, are faster, and provide the same excellent detection of syntactic errors. Unfortunately they also tend to be rather larger. A simple optimization is presented which completely overcomes this single disadvantage without sacrificing any of the advantages of the method. These theoretical techniques are adapted to provide truly practical chain-free parsers based on the conventional SLR and LALR parsing methods. Detailed consideration is given to the use of 'default reductions' and related techniques for achieving compact representations of these parsers. The resulting chain-free parsers are not only faster than their ordinary counterparts, but probably smaller too. We believe their advantages are such that they should substantially replace other parsing methods currently used in programming language compilers.
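    The effect of chain-free parsing can be illustrated with a small sketch: a sparse parse is what remains of an ordinary LR reduction sequence once reductions by chain productions are removed (a CFLR(k) parser avoids performing them in the first place rather than filtering them afterwards). The expression grammar and parse below are invented for illustration.

```python
# Sketch of what a "sparse" (chain-free) parse is: the ordinary LR
# reduction sequence with reductions by chain productions removed.
# A chain production is a semantically irrelevant production whose
# right part is a single symbol. The grammar and the reduction
# sequence for "id + id" are illustrative only.

GRAMMAR = {
    1: ("E", ["E", "+", "T"]),
    2: ("E", ["T"]),        # chain production
    3: ("T", ["F"]),        # chain production
    4: ("F", ["id"]),
}
CHAIN = {2, 3}  # productions declared semantically irrelevant

def sparse_parse(reductions):
    """Drop chain-production reductions from an LR parse, given as a list
    of production numbers in the order the reductions are performed."""
    return [p for p in reductions if p not in CHAIN]

# Rightmost reductions for "id + id" with the grammar above:
lr_parse = [4, 3, 2, 4, 3, 1]
print(sparse_parse(lr_parse))  # [4, 4, 1]
```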

    Quality measures and assurance for AI (Artificial Intelligence) software

    This report is concerned with the application of software quality and evaluation measures to AI software and, more broadly, with the question of quality assurance for AI software. Considered are not only the metrics that attempt to measure some aspect of software quality, but also the methodologies and techniques (such as systematic testing) that attempt to improve some dimension of quality, without necessarily quantifying the extent of the improvement. The report is divided into three parts. Part 1 reviews existing software quality measures, i.e., those that have been developed for, and applied to, conventional software. Part 2 considers the characteristics of AI software, the applicability and potential utility of measures and techniques identified in the first part, and reviews those few methods developed specifically for AI software. Part 3 presents an assessment and recommendations for the further exploration of this important area.
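    As a concrete example of the kind of conventional measure surveyed in Part 1, the sketch below approximates McCabe's cyclomatic complexity as one plus the number of decision points in a function; the use of Python's ast module is purely illustrative, as the report is not tied to any particular language or tool.

```python
# Illustrative approximation of one classic quality measure: McCabe's
# cyclomatic complexity, counted here as 1 plus the number of decision
# points found in the source. The choice of Python and of the counted
# node types is an assumption made for this sketch.
import ast

DECISIONS = (ast.If, ast.For, ast.While, ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source):
    return 1 + sum(isinstance(node, DECISIONS)
                   for node in ast.walk(ast.parse(source)))

print(cyclomatic_complexity(
    "def classify(x):\n"
    "    if x < 0:\n"
    "        return 'negative'\n"
    "    for _ in range(3):\n"
    "        x -= 1\n"
    "    return 'done'\n"
))  # 3: the if and the for each add a decision point
```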

    Formal methods and digital systems validation for airborne systems

    This report has been prepared to supplement a forthcoming chapter on formal methods in the FAA Digital Systems Validation Handbook. Its purpose is as follows: to outline the technical basis for formal methods in computer science; to explain the use of formal methods in the specification and verification of software and hardware requirements, designs, and implementations; to identify the benefits, weaknesses, and difficulties in applying these methods to digital systems used on board aircraft; and to suggest factors for consideration when formal methods are offered in support of certification. These latter factors assume the context for software development and assurance described in RTCA document DO-178B, 'Software Considerations in Airborne Systems and Equipment Certification,' Dec. 1992.

    Model-based reconfiguration: Diagnosis and recovery

    We extend Reiter's general theory of model-based diagnosis to a theory of fault detection, identification, and reconfiguration (FDIR). The generality of Reiter's theory readily supports an extension in which the problem of reconfiguration is viewed as a close analog of the problem of diagnosis. Using a reconfiguration predicate 'rcfg' analogous to the abnormality predicate 'ab,' we derive a strategy for reconfiguration by transforming the corresponding strategy for diagnosis. There are two obvious benefits of this approach: algorithms for diagnosis can be exploited as algorithms for reconfiguration, and we have a theoretical framework for an integrated approach to FDIR. As a first step toward realizing these benefits, we show that a class of diagnosis engines can be used for reconfiguration, and we discuss algorithms for integrated FDIR. We argue that integrating recovery and diagnosis is an essential next step if this technology is to be useful for practical applications.
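    A hedged sketch of the analogy: in Reiter's theory a diagnosis is a minimal hitting set of the conflict sets, and the same hitting set, read through the reconfiguration predicate 'rcfg' instead of the abnormality predicate 'ab,' names a minimal set of components to switch out. The brute-force enumeration and the conflict sets below are invented stand-ins, not Reiter's HS-tree construction.

```python
# Sketch of the diagnosis/reconfiguration analogy: a minimal hitting set
# of the conflict sets is a minimal set of components that, declared
# abnormal ('ab'), explains the observations; read via 'rcfg', it is a
# minimal set of components to reconfigure. The conflict sets and the
# brute-force search are illustrative assumptions.
from itertools import chain, combinations

def minimal_hitting_sets(conflicts):
    components = sorted(set(chain.from_iterable(conflicts)))
    hitting = []
    for r in range(len(components) + 1):
        for cand in combinations(components, r):
            if all(set(cand) & c for c in conflicts):        # hits every conflict
                if not any(set(h) <= set(cand) for h in hitting):  # keep only minimal sets
                    hitting.append(cand)
    return hitting

conflicts = [{"sensor1", "voter"}, {"sensor2", "voter"}]
for hs in minimal_hitting_sets(conflicts):
    print("diagnosis (ab) / reconfiguration (rcfg) candidate:", hs)
# ('voter',) and ('sensor1', 'sensor2') are the minimal candidates
```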